In medical image segmentation, it is often necessary to collect opinions from multiple experts to make the final decision. This clinical routine helps to mitigate individual bias. But when the data is multiply annotated, standard deep learning models are often not applicable. In this paper, we propose a novel neural network framework, called Multi-Rater Prism (MrPrism), to learn medical image segmentation from multiple labels. Inspired by iterative half-quadratic optimization, the proposed MrPrism combines the multi-rater confidence assignment task and the calibrated segmentation task in a recurrent manner. In this recurrent process, MrPrism can learn the inter-observer variability while taking the semantic properties of the image into account, and finally converges to a self-calibrated segmentation result that reflects the inter-observer agreement. Specifically, we propose Converging Prism (ConP) and Diverging Prism (DivP) to process the two tasks iteratively. ConP learns the calibrated segmentation based on the multi-rater confidence maps estimated by DivP, while DivP generates the multi-rater confidence maps based on the segmentation masks estimated by ConP. The experimental results show that by recurrently running ConP and DivP, the two tasks achieve mutual improvement. The final converged segmentation result of MrPrism outperforms state-of-the-art (SOTA) strategies on a wide range of medical image segmentation tasks.
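The alternating scheme described above can be illustrated with a minimal, purely hypothetical sketch: a `conp` step fuses the rater masks under the current confidences, and a `divp` step re-estimates the confidences from each rater's agreement with the current segmentation. The weighted-average and inverse-error rules below are illustrative stand-ins, not the networks from the paper.

```python
def conp(rater_masks, confidences):
    """Calibrated segmentation: confidence-weighted fusion of rater masks."""
    total = sum(confidences)
    return [
        sum(c * m[i] for c, m in zip(confidences, rater_masks)) / total
        for i in range(len(rater_masks[0]))
    ]

def divp(rater_masks, seg):
    """Re-estimate per-rater confidences from agreement with the current seg."""
    eps = 1e-6
    return [
        1.0 / (eps + sum((mi - si) ** 2 for mi, si in zip(m, seg)))
        for m in rater_masks
    ]

def mrprism(rater_masks, iters=10):
    conf = [1.0] * len(rater_masks)   # start from uniform confidence
    for _ in range(iters):            # half-quadratic-style alternation
        seg = conp(rater_masks, conf)
        conf = divp(rater_masks, seg)
    return seg, conf
```

On a toy input where two raters agree and one disagrees, the alternation quickly down-weights the outlier and the fused mask converges toward the majority opinion.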
Unlike general visual classification, some classification tasks are more challenging because they require professional categorization of the images; in this paper, we call them expert-level classification. Previous fine-grained visual classification (FGVC) work has made many efforts on some of its specific sub-tasks. However, these methods are difficult to extend to the general case, which relies on a comprehensive analysis of part-global correlations and hierarchical feature interactions. In this paper, we propose Expert Network (ExpNet) to address the unique challenges of expert-level classification through a unified network. In ExpNet, we hierarchically decouple the part and context features and process them individually using a novel attentive mechanism called Gaze-Shift. In each stage, Gaze-Shift produces a focal-part feature for the subsequent abstraction and memorizes a context-related embedding. We then fuse the final focal embedding with all the memorized context-related embeddings to make the prediction. Such an architecture realizes the dual-track processing of partial and global information as well as hierarchical feature interactions. We conduct experiments on three representative expert-level classification tasks: FGVC, disease classification, and artwork attribute classification. In these experiments, the superior performance of ExpNet compared with the state of the art in a wide range of fields indicates the effectiveness and generalization of our method. The code will be made publicly available.
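As a rough illustration of the dual-track idea, the toy sketch below keeps a "focal" part of a feature vector at each stage and memorizes a scalar context summary, fusing everything at the end. The magnitude-based split and the mean-pooled context are hypothetical stand-ins for the paper's attentive Gaze-Shift mechanism.

```python
def gaze_shift(feature):
    """Split a feature vector into a focal half (largest-magnitude entries,
    original order preserved) and a context summary. A crude stand-in for
    the attentive mechanism described in the abstract."""
    order = sorted(range(len(feature)), key=lambda i: -abs(feature[i]))
    focal_idx = sorted(order[: len(feature) // 2])
    focal = [feature[i] for i in focal_idx]
    context = sum(feature) / len(feature)   # memorized context embedding
    return focal, context

def expnet_forward(feature, num_stages=3):
    contexts = []
    for _ in range(num_stages):             # hierarchical decoupling
        feature, ctx = gaze_shift(feature)
        contexts.append(ctx)
    # fuse the final focal embedding with all memorized context embeddings
    return sum(feature) + sum(contexts)
```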
The diffusion probabilistic model (DPM) has recently become one of the hottest topics in computer vision. Its image generation applications, such as Imagen, Latent Diffusion Models, and Stable Diffusion, have shown impressive generation capabilities and aroused extensive discussion in the community. Many recent studies have also found it useful in other vision tasks, such as image deblurring, super-resolution, and anomaly detection. Inspired by the success of DPM, we propose the first DPM-based model for general medical image segmentation tasks, which we name MedSegDiff. To enhance the step-wise regional attention of DPM for medical image segmentation, we propose dynamic conditional encoding, which establishes state-adaptive conditions for each sampling step. We further propose the Feature Frequency Parser (FF-Parser) to eliminate the negative effect of high-frequency noise components in this process. We verify MedSegDiff on three medical segmentation tasks with different image modalities: optic cup segmentation on fundus images, brain tumor segmentation on MRI images, and thyroid nodule segmentation on ultrasound images. The experimental results show that MedSegDiff outperforms state-of-the-art (SOTA) methods by a considerable margin, indicating the generalization and effectiveness of the proposed model.
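The frequency-domain filtering idea behind the FF-Parser can be sketched in one dimension: transform a feature to the Fourier domain, modulate its spectrum with an attentive map, and transform back. The naive DFT and the hand-set attention map below are illustrative assumptions, not the paper's learned module.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2), for illustration only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]

def ff_parser(feature, attn):
    """Modulate the feature's spectrum with an attentive map; in the paper
    this map would be learned, here it is supplied by hand."""
    return idft([a * f for a, f in zip(attn, dft(feature))])
```

For example, zeroing the Nyquist bin of a constant signal corrupted by alternating high-frequency noise recovers the clean constant, which is the kind of high-frequency suppression the abstract describes.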
Brain age has been proven to be a phenotype related to cognitive performance and brain disease. Achieving accurate brain age prediction is an essential prerequisite for optimizing the predicted brain-age difference as a biomarker. As a comprehensive biological characteristic, brain age is hard to exploit accurately with models based on feature engineering and local processing, such as local convolutions and recurrent operations that process one local neighborhood at a time. Instead, vision transformers learn global attentive interactions among patch tokens, introducing less inductive bias and modeling long-range dependencies. In this regard, we propose a novel network for learning brain age interpreted with global and local dependencies, where the corresponding representations are captured by a Successively Permuted Transformer (SPT) and convolution blocks. The SPT brings computational efficiency and locates the 3D spatial information indirectly by continuously encoding 2D slices from different views. Finally, we collect a large cohort of 22645 subjects with ages ranging from 14 to 97, and our network performs the best among a series of deep learning methods, yielding a mean absolute error (MAE) of 2.855 on the validation set and 2.911 on an independent test set.
On medical images, many tissues/lesions may be ambiguous. That is why a group of clinical experts is usually invited to annotate the medical segmentation to mitigate personal bias. However, this clinical routine also brings new challenges to the application of machine learning algorithms. Without a definite ground truth, it is hard to train and evaluate deep learning models. When annotations are collected from different graders, a common choice is majority vote. However, such a strategy ignores the differences between grading experts. In this paper, we consider the task of predicting segmentation with calibrated inter-observer uncertainty. We note that in clinical practice, medical image segmentation is usually used to assist disease diagnosis. Inspired by this observation, we propose the diagnosis-first principle, which takes disease diagnosis as the criterion to calibrate the inter-observer segmentation uncertainty. Following this idea, a framework named Diagnosis-First segmentation Framework (DiFF) is proposed to estimate diagnosis-first segmentation from raw images. Specifically, DiFF first learns to fuse the multi-rater segmentation labels into a single ground truth that maximizes the disease diagnosis performance. We dub the fused ground truth the Diagnosis-First Ground-truth (DF-GT). We verify the effectiveness of DiFF on three different medical segmentation tasks: OD/OC segmentation on fundus images, thyroid nodule segmentation on ultrasound images, and skin lesion segmentation on dermoscopic images. Experimental results show that the proposed DiFF is able to significantly facilitate the corresponding disease diagnosis, outperforming previous state-of-the-art multi-rater learning methods.
Clinically, the accurate annotation of lesions/tissues can significantly facilitate disease diagnosis. For example, the segmentation of the optic disc/cup (OD/OC) on fundus images helps glaucoma diagnosis, the segmentation of skin lesions on dermoscopic images helps melanoma diagnosis, and so on. With the development of deep learning techniques, a wide range of methods has shown that lesion/tissue segmentation can also facilitate automated disease diagnosis models. However, existing methods are limited in that they can only capture static regional correlations in the images. Inspired by the global and dynamic nature of the Vision Transformer, in this paper we propose the Segmentation-Assisted diagnosis Transformer (SeaTrans) to transfer segmentation knowledge to the disease diagnosis network. Specifically, we first propose an asymmetric multi-scale interaction strategy to correlate each single low-level diagnosis feature with multi-scale segmentation features. Then, an effective strategy called the SEA block is adopted to vitalize the diagnosis features via the correlated segmentation features. To model the segmentation-diagnosis interaction, the SEA block first embeds the diagnosis feature based on the segmentation information via an encoder, and then transfers the embedding back to the diagnosis feature space via a decoder. Experimental results demonstrate that SeaTrans surpasses a wide range of state-of-the-art (SOTA) segmentation-assisted diagnosis methods on several disease diagnosis tasks.
The segmentation of the optic disc (OD) and optic cup (OC) from fundus images is an important and fundamental task for glaucoma diagnosis. In clinical practice, it is often necessary to collect opinions from multiple experts to obtain the final OD/OC annotation. This clinical routine helps to mitigate individual bias. But when the data is multiply annotated, standard deep learning models become inapplicable. In this paper, we propose a novel neural network framework to learn OD/OC segmentation from multi-rater annotations. The segmentation results are self-calibrated by iteratively optimizing the multi-rater expertness estimation and the calibrated OD/OC segmentation. In this way, the proposed method can realize a mutual improvement of both tasks and finally obtain a refined segmentation result. Specifically, we propose the Diverging Model (DivM) and the Converging Model (ConM) to process these two tasks respectively. ConM segments the raw image based on the multi-rater expertness map provided by DivM. DivM generates the multi-rater expertness map from the segmentation masks provided by ConM. The experimental results show that by recurrently running ConM and DivM, the results can be self-calibrated so as to outperform a range of state-of-the-art (SOTA) multi-rater segmentation methods.
Pre-training is essential to the performance of deep learning models, especially in medical image analysis tasks with limited training data. However, existing pre-training methods are inflexible, as the pre-trained weights of one model cannot be reused by other network architectures. In this paper, we propose an architecture-irrelevant initializer, which can initialize any given network architecture well after being pre-trained only once. The proposed initializer is a hypernetwork that takes the downstream architecture as an input graph and outputs the initialization parameters of the corresponding architecture. We show the effectiveness and efficiency of the proposed initializer through extensive experimental results on multiple medical imaging modalities, especially in data-limited fields. Moreover, we demonstrate that the proposed algorithm can be reused as a favorable plug-and-play initializer for any downstream architecture and task (both classification and segmentation) of the same modality.
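The hypernetwork-initializer idea can be sketched as a function from per-layer descriptors to initialization parameters. The fixed He-style scaling rule below is a hypothetical stand-in for the learned hypernetwork, which in the paper consumes the full architecture graph rather than isolated layer shapes.

```python
import math
import random

def hyper_scale(fan_in, fan_out):
    """Stand-in for the hypernetwork's output: a He-style gain derived
    from the layer descriptor (a learned mapping in the paper)."""
    return math.sqrt(2.0 / fan_in)

def initialize(architecture, seed=0):
    """architecture: list of (fan_in, fan_out) layer descriptors.
    Returns one weight matrix per layer, drawn at the predicted scale."""
    rng = random.Random(seed)
    params = []
    for fan_in, fan_out in architecture:
        s = hyper_scale(fan_in, fan_out)
        params.append([[rng.gauss(0.0, s) for _ in range(fan_out)]
                       for _ in range(fan_in)])
    return params
```

Because the initializer only consumes shape descriptors, the same pre-trained mapping could in principle be applied to any downstream architecture, which is the flexibility the abstract emphasizes.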
With the rapid development of artificial intelligence (AI) in medical image processing, deep learning for color fundus photography (CFP) analysis is also evolving. Although there are some open-source, labeled CFP datasets in the ophthalmology community, large-scale screening datasets only have labels of disease categories, while datasets with annotations of fundus structures are usually small. In addition, labeling standards are not uniform across datasets, and there is often no clear information on the acquisition devices. Here we release a multi-annotation, multi-quality, and multi-device color fundus image dataset for glaucoma analysis as part of an original challenge -- the Retinal Fundus Glaucoma Challenge 2nd Edition (REFUGE2). The REFUGE2 dataset contains 2000 color fundus images with annotations of glaucoma classification, optic disc/cup segmentation, and fovea localization. Meanwhile, the REFUGE2 challenge sets three sub-tasks of automatic glaucoma diagnosis and fundus structure analysis and provides an online evaluation framework. Based on the characteristics of the multi-device and multi-quality data, methods with strong generalization ability were contributed to the challenge to make the predictions more robust. This shows that REFUGE2 brings attention to the characteristics of real-world multi-domain data, bridging the gap between scientific research and clinical application.
With the development of deep learning techniques, more and more methods have been proposed to segment the optic disc and cup (OD/OC) from fundus images. Clinically, OD/OC segmentation is often annotated by multiple clinical experts to mitigate personal bias. However, it is hard to train automated deep learning models on multiple labels. A common practice to tackle this issue is majority vote, e.g., taking the average of the multiple labels. However, such a strategy ignores the different expertness of the medical experts. Motivated by the observation that OD/OC segmentation is often used for glaucoma diagnosis in clinical practice, in this paper we propose a novel strategy to fuse multi-rater OD/OC segmentation labels via the glaucoma diagnosis performance. Specifically, we assess the expertness of each rater through an attentive glaucoma diagnosis network. For each rater, its contribution to the diagnosis is reflected as an expertness map. To ensure the expertness maps generalize to different glaucoma diagnosis models, we further propose an Expertness Generator (ExpG) to eliminate high-frequency components in the optimization process. Based on the obtained expertness maps, the multi-rater labels can be fused into a single ground truth, which we dub the Diagnosis First Ground-truth (DiagFirstGT). Experimental results show that by using DiagFirstGT as the ground truth, OD/OC segmentation networks predict masks with superior glaucoma diagnosis performance.